8 articles
OpenAI releases a Child Safety Blueprint to combat the rise in child sexual exploitation linked to AI advancements.
AI tools are being weaponized on Telegram to create and distribute non-consensual intimate imagery, including deepfakes and automated archives, according to a new analysis of 2.8 million messages from Italy and Spain.
AI offensive cyber capabilities are doubling every 5.7 months, new research finds, raising serious concerns about digital security.
Learn why casual conversations with AI chatbots can have privacy implications and how to protect your personal information when using these tools.
Learn how to set up and use a password manager system that provides both security and convenience, from initialization to synchronization across devices.
YouTube expands AI deepfake detection to politicians, journalists, and officials, enabling them to flag unauthorized likenesses for removal. The move strengthens efforts to combat AI-generated misinformation.
Europe’s new social media regulations are reshaping how teens interact online, forcing platforms to adapt to stricter age verification and content controls.
OpenAI's February 2026 threat report reveals how malicious actors are pairing AI models with websites and social platforms to conduct sophisticated attacks. The report highlights the growing difficulty of detecting AI-powered deception and calls for stronger defensive measures.